
    Non-monotonic inference properties for assumption-based argumentation

    Cumulative Transitivity and Cautious Monotonicity are widely considered to be important properties of non-monotonic inference, and equally so with regard to information change. We propose three novel formulations of each of these properties for Assumption-Based Argumentation (ABA), an established structured argumentation formalism, and investigate these properties under a variety of ABA semantics.
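    For reference, the classical KLM-style schemas that these properties generalise, stated for a defeasible consequence relation |~, can be written as below; the paper's three ABA formulations recast these schemas in argumentation terms, so the following is standard background rather than the paper's own notation.

        % Cut (Cumulative Transitivity) and Cautious Monotonicity (CM),
        % standard KLM-style formulations; background, not the paper's notation
        \[
        \text{(Cut)}\quad
        \frac{A \,|\!\sim\, B \qquad A \wedge B \,|\!\sim\, C}{A \,|\!\sim\, C}
        \qquad\qquad
        \text{(CM)}\quad
        \frac{A \,|\!\sim\, B \qquad A \,|\!\sim\, C}{A \wedge B \,|\!\sim\, C}
        \]

    Together, Cut and Cautious Monotonicity characterise cumulative inference: adding an already-derived conclusion to the premises neither removes nor adds conclusions.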

    Adapting the DF-QuAD algorithm to bipolar argumentation

    We define a quantitative semantics for evaluating the strength of arguments in Bipolar Argumentation Frameworks (BAFs) by adapting the Discontinuity-Free QuAD (DF-QuAD) algorithm, previously used for evaluating the strength of arguments in Quantitative Argumentation Debate (QuAD) frameworks. We study the relationship between the new semantics and some existing semantics for other argumentation frameworks, as well as some properties of the new semantics.
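    As a concrete illustration, the following is a minimal Python sketch of DF-QuAD-style evaluation on an acyclic bipolar graph, following the published DF-QuAD definitions: probabilistic-sum aggregation of attacker and supporter strengths, then a combination step that shifts the base score towards 0 or 1. The graph encoding and function names are ours, and the sketch sidesteps cycles, which a semantics for general BAFs must address.

        # Minimal sketch of DF-QuAD strength evaluation on an acyclic bipolar
        # graph. The encoding and names are illustrative, not the paper's code.

        def aggregate(strengths):
            """Probabilistic-sum aggregation: f(v1, v2) = v1 + v2 - v1*v2."""
            total = 0.0
            for v in strengths:
                total = total + v - total * v
            return total

        def combine(base, att, sup):
            """Shift the base score towards 0 (attack surplus) or 1 (support surplus)."""
            if att >= sup:
                return base - base * (att - sup)
            return base + (1.0 - base) * (sup - att)

        def strength(arg, base, attackers, supporters, memo=None):
            """Recursively evaluate an argument's strength in an acyclic BAF."""
            memo = {} if memo is None else memo
            if arg not in memo:
                att = aggregate(strength(a, base, attackers, supporters, memo)
                                for a in attackers.get(arg, []))
                sup = aggregate(strength(s, base, attackers, supporters, memo)
                                for s in supporters.get(arg, []))
                memo[arg] = combine(base[arg], att, sup)
            return memo[arg]

        # Example: b attacks a, c supports a, all base scores 0.5.
        base = {"a": 0.5, "b": 0.5, "c": 0.5}
        print(strength("a", base, attackers={"a": ["b"]}, supporters={"a": ["c"]}))
        # attack and support cancel out, so a keeps its base score of 0.5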

    A Temporal Planning Example with Assumption-Based Argumentation


    ABAplus: Attack Reversal in Abstract and Structured Argumentation with Preferences

    We present ABAplus, a system that implements reasoning with the argumentation formalism ABA+. ABA+ is a structured argumentation formalism that extends Assumption-Based Argumentation (ABA) with preferences and accounts for preferences via attack reversal. ABA+ also admits as an instance Preference-based Argumentation, which accounts for preferences by reversing attacks in abstract argumentation (AA). ABAplus readily implements attack reversal in both AA and ABA-style structured argumentation. ABAplus affords computation, visualisation and comparison of extensions under five argumentation semantics. It is available both as a stand-alone system and as a web application.
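    The attack-reversal idea at the abstract level can be conveyed with a small sketch, using our own illustrative encoding rather than the ABAplus implementation: an attack from a to b stands unless b is strictly preferred to a, in which case it is reversed into an attack from b to a.

        # Illustrative sketch of attack reversal in preference-based abstract
        # argumentation; not the ABAplus code.

        def reverse_attacks(attacks, prefers):
            """attacks: set of (attacker, attacked) pairs.
            prefers(x, y): True iff x is strictly preferred to y."""
            revised = set()
            for a, b in attacks:
                if prefers(b, a):
                    revised.add((b, a))   # attacked argument preferred: reverse
                else:
                    revised.add((a, b))   # attack stands
            return revised

        # Example: a attacks b, but b is strictly preferred to a.
        attacks = {("a", "b"), ("b", "c")}
        pref = {("b", "a")}               # b > a
        print(reverse_attacks(attacks, lambda x, y: (x, y) in pref))
        # yields {('b', 'a'), ('b', 'c')}: the attack on b is reversed,
        # while b's attack on c stands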

    Argumentation for explainable scheduling

    Mathematical optimization offers highly effective tools for finding solutions to problems with well-defined goals, notably scheduling. However, optimization solvers are often unexplainable black boxes whose solutions are inaccessible to users and which users cannot interact with. We define a novel paradigm using argumentation to empower the interaction between optimization solvers and users, supported by tractable explanations which certify or refute solutions. A solution can come from a solver or be one of interest to a user (in the context of 'what-if' scenarios). Specifically, we define argumentative and natural language explanations for why a schedule is (not) feasible, (not) efficient or (not) satisfying fixed user decisions, based on models of the fundamental makespan scheduling problem in terms of abstract argumentation frameworks (AFs). We define three types of AFs, whose stable extensions are in one-to-one correspondence with schedules that are feasible, efficient and satisfying fixed decisions, respectively. We extract the argumentative explanations from these AFs and the natural language explanations from the argumentative ones.
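    To make the schedule-extension correspondence concrete, here is a toy sketch of our own, far simpler than the paper's three AF constructions: arguments are job-to-machine assignments, and two assignments attack each other exactly when they place the same job on different machines. Conflict-freeness then forbids double assignment, and the stability requirement to attack every outside argument forces every job to be assigned, so stable extensions coincide with complete assignments.

        from itertools import combinations

        # Toy sketch: assigning each job to exactly one machine, encoded as an
        # AF whose stable extensions are exactly the complete assignments.
        # A simplified illustration, not the paper's AF constructions.

        jobs, machines = ["j1", "j2"], ["m1", "m2"]
        args = [(j, m) for j in jobs for m in machines]

        def attacks(x, y):
            """Assignments of the same job to different machines attack each other."""
            return x[0] == y[0] and x[1] != y[1]

        def is_stable(ext):
            conflict_free = not any(attacks(x, y) for x in ext for y in ext)
            attacks_rest = all(any(attacks(x, y) for x in ext)
                               for y in args if y not in ext)
            return conflict_free and attacks_rest

        stable = [set(c) for n in range(len(args) + 1)
                  for c in combinations(args, n) if is_stable(set(c))]
        print(stable)  # the four complete job-to-machine assignments

    The paper's AFs additionally capture efficiency and fixed user decisions, which this toy encoding deliberately omits.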

    Explanations by arbitrated argumentative dispute

    Explaining outputs determined algorithmically by machines is one of the most pressing and studied problems in Artificial Intelligence (AI) nowadays, but the equally pressing problem of using AI to explain outputs determined by humans is less studied. In this paper we advance a novel methodology integrating case-based reasoning and computational argumentation from AI to explain outcomes, determined by humans or by machines, indifferently, for cases characterised by discrete (static) features and/or (dynamic) stages. At the heart of our methodology lies the concept of arbitrated argumentative disputes between two fictitious disputants arguing, respectively, for or against a case's output in need of explanation, and where this case acts as an arbiter. Specifically, in explaining the outcome of a case in question, the disputants put forward as arguments relevant cases favouring their respective positions, with arguments/cases conflicting due to their features, stages and outcomes, and the applicability of arguments/cases arbitrated by the features and stages of the case in question. In addition, we use arbitrated dispute trees to identify the excess features that help the winning disputant to win the dispute and thus complement the explanation. We evaluate our novel methodology theoretically, proving desirable properties thereof, and empirically, in the context of primary legislation in the United Kingdom (UK), concerning the passage of Bills that may or may not become laws. High-level factors underpinning a Bill's passage are its content-agnostic features, such as type, number of sponsors and ballot order, as well as the UK Parliament's rules of conduct. Given the high volume of proposed legislation (hundreds of Bills a year), it is hard even for legal experts to explain on a large scale why certain Bills pass or not. We show how our methodology can address this problem by automatically providing high-level explanations of why Bills pass or not, based on the given Bills and their content-agnostic features.
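    A much-simplified sketch of the dispute intuition follows; this is our own toy reading with hypothetical feature names, and it ignores stages and dispute trees entirely. Past cases argue for their outcomes, a past case is inapplicable if the focus case lacks some of its features (arbitration by the case in question), and among applicable cases a strictly more feature-specific case defeats a less specific one with the opposite outcome.

        # Toy sketch of arbitration by the focus case: only past cases whose
        # features all hold in the focus case are applicable, and among those
        # a most-specific case decides the outcome. A hypothetical
        # simplification of the arbitrated-dispute methodology, not its code.

        def predict(focus_features, past_cases):
            """past_cases: list of (features: frozenset, outcome) pairs."""
            applicable = [(f, o) for f, o in past_cases if f <= focus_features]
            # a case is undefeated if no applicable case with a different
            # outcome has a strict superset of its features
            undefeated = [(f, o) for f, o in applicable
                          if not any(f < g for g, o2 in applicable if o2 != o)]
            outcomes = {o for _, o in undefeated}
            return outcomes.pop() if len(outcomes) == 1 else "undecided"

        past = [(frozenset(), "fail"),                      # default outcome
                (frozenset({"government_bill"}), "pass")]   # hypothetical feature
        print(predict(frozenset({"government_bill", "low_ballot_order"}), past))
        # 'pass': the more specific applicable case defeats the default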

    Modelling GDPR-Compliant Explanations for Trustworthy AI

    Through the General Data Protection Regulation (GDPR), the European Union has set out its vision for Automated Decision-Making (ADM) and AI, which must be reliable and human-centred. In particular, we are interested in the Right to Explanation, which requires industry to produce explanations of ADM. The High-Level Expert Group on Artificial Intelligence (AI-HLEG), set up to support the implementation of this vision, has produced guidelines discussing the types of explanations that are appropriate for user-centred (interactive) Explanatory Tools. In this paper we propose our version of Explanatory Narratives (EN), based on user-centred concepts drawn from ISO 9241, as a model for user-centred explanations aligned with the GDPR and the AI-HLEG guidelines. Through the use of ENs we convert the problem of generating explanations for ADM into the identification of an appropriate path over an Explanatory Space, allowing explainees to interactively explore it and produce the explanation best suited to their needs. To this end we list suitable exploration heuristics, study the properties and structure of explanations, and discuss the proposed model, identifying its weaknesses and strengths.

    On the links between argumentation-based reasoning and nonmonotonic reasoning

    In this paper we investigate the links between instantiated argumentation systems and the axioms for non-monotonic reasoning described in [15], with the aim of characterising the nature of argument-based reasoning. In doing so, we consider two possible interpretations of the consequence relation, and describe which axioms are met by ASPIC+ under each of these interpretations. We then consider the links between these axioms and the rationality postulates. Our results indicate that argument-based reasoning as characterised by ASPIC+ is, according to the axioms of [15], non-cumulative and non-monotonic, and therefore weaker than the weakest non-monotonic reasoning systems considered in [15]. This weakness underpins ASPIC+'s success in modelling other reasoning systems. We conclude by considering the relationship between ASPIC+ and other weak logical systems.